# Efficient GGUF inference
## Wan2.1 14B VACE GGUF
- **Description:** The GGUF-format version of the Wan2.1-VACE-14B model, used mainly for text-to-video generation tasks.
- **Task:** Text-to-Video
- **Publisher:** QuantStack
- **License:** Apache-2.0
- **Downloads:** 146.36k · **Likes:** 139
## CodeLlama 7B Python GGUF
- **Description:** CodeLlama 7B Python is a 7B-parameter large language model from Meta, focused on Python code generation, provided here as a quantized GGUF build.
- **Task:** Large Language Model
- **Library:** Transformers
- **Publisher:** TheBloke
- **Downloads:** 2,385 · **Likes:** 57
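Both models above ship in the GGUF container format. As a rough illustration of what that format looks like on disk, here is a minimal sketch that parses the fixed GGUF header (4-byte magic, uint32 version, uint64 tensor count, uint64 metadata key/value count, per the GGUF v3 layout). The header bytes below are synthetic, built for illustration, not read from a real model file.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    # GGUF v3 header layout: 4-byte magic "GGUF", uint32 version,
    # uint64 tensor count, uint64 metadata key/value count (little-endian)
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}

# Synthetic header for illustration only; the counts are made up
header = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(header))
# → {'version': 3, 'tensors': 291, 'kv_pairs': 24}
```

In practice you would pass a real `.gguf` file to a runtime such as llama.cpp rather than parse it by hand; this sketch only shows why the format is cheap to inspect: the quantization metadata and tensor index live in a flat, seekable header.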